Prototyping and validating hardware-software components, sub-systems and systems within the intelligent transportation system-of-systems framework requires a modular yet flexible and open-access ecosystem. This work presents our attempt towards developing such a comprehensive research and education ecosystem, called AutoDRIVE, for synergistically prototyping, simulating and deploying cyber-physical solutions pertaining to autonomous driving as well as smart city management. AutoDRIVE features both software and hardware-in-the-loop testing interfaces with openly accessible scaled vehicle and infrastructure components. The ecosystem is compatible with a variety of development frameworks, and supports both single and multi-agent paradigms through local as well as distributed computing. Most critically, AutoDRIVE is intended to be modularly expandable to explore emergent technologies, and this work highlights various complementary features and capabilities of the proposed ecosystem by demonstrating four such deployment use-cases: (i) autonomous parking using a probabilistic robotics approach for mapping, localization, path planning and control; (ii) behavioral cloning using computer vision and deep imitation learning; (iii) intersection traversal using vehicle-to-vehicle communication and deep reinforcement learning; and (iv) smart city management using vehicle-to-infrastructure communication and the internet of things.
Quantifying the perceptual similarity of two images is a long-standing problem in low-level computer vision. The natural image domain commonly relies on supervised learning, e.g., a pre-trained VGG, to obtain a latent representation. However, due to domain shift, pre-trained models from the natural image domain might not apply to other image domains, such as medical imaging. Notably, in medical imaging, evaluating the perceptual similarity is exclusively performed by specialists trained extensively in diverse medical fields. Thus, medical imaging remains devoid of task-specific, objective perceptual measures. This work answers the question: Is it necessary to rely on supervised learning to obtain an effective representation that could measure perceptual similarity, or is self-supervision sufficient? To understand whether recent contrastive self-supervised representation (CSR) may come to the rescue, we start with natural images and systematically evaluate CSR as a metric across numerous contemporary architectures and tasks and compare them with existing methods. We find that in the natural image domain, CSR behaves on par with the supervised one on several perceptual tests as a metric, and in the medical domain, CSR better quantifies perceptual similarity concerning the experts' ratings. We also demonstrate that CSR can significantly improve image quality in two image synthesis tasks. Finally, our extensive results suggest that perceptuality is an emergent property of CSR, which can be adapted to many image domains without requiring annotations.
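At inference time, the CSR-as-metric idea above reduces to a feature-space distance under a frozen encoder. A minimal sketch, using a toy random-projection stand-in for the encoder (the actual work uses trained contrastive backbones such as SimCLR-style networks):

```python
import numpy as np

def perceptual_distance(feat_fn, img_a, img_b):
    """Perceptual distance as cosine distance between encoder features.

    `feat_fn` stands in for a frozen contrastive self-supervised encoder;
    here any callable mapping an image array to a 1-D feature vector works.
    """
    fa, fb = feat_fn(img_a), feat_fn(img_b)
    fa = fa / np.linalg.norm(fa)
    fb = fb / np.linalg.norm(fb)
    return 1.0 - float(fa @ fb)  # 0 = same feature direction, 2 = opposite

# Toy stand-in encoder: a fixed random linear projection of flattened pixels.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16 * 16))
encode = lambda img: W @ img.ravel()

img = rng.random((16, 16))
noisy = img + 0.05 * rng.standard_normal((16, 16))
d = perceptual_distance(encode, img, noisy)
```

Swapping `encode` for a pretrained self-supervised backbone yields the metric evaluated in the abstract.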
Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention. However, the large majority of neural PDE solvers only apply to rectilinear domains, and do not systematically address the imposition of Dirichlet/Neumann boundary conditions over irregular domain boundaries. In this paper, we present a framework to neurally solve partial differential equations over domains with irregularly shaped (non-rectilinear) geometric boundaries. Our network takes in the shape of the domain as an input (represented using an unstructured point cloud, or any other parametric representation such as Non-Uniform Rational B-Splines) and is able to generalize to novel (unseen) irregular domains; the key technical ingredient to realizing this model is a novel approach for identifying the interior and exterior of the computational grid in a differentiable manner. We also perform a careful error analysis which reveals theoretical insights into several sources of error incurred in the model-building process. Finally, we showcase a wide variety of applications, along with favorable comparisons with ground truth solutions.
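The key ingredient named above is a differentiable interior/exterior test. The paper's exact construction is not reproduced here; a common differentiable stand-in smooths the sign of a signed distance field with a sigmoid, so gradients flow through the domain shape:

```python
import numpy as np

def soft_inside(points, sdf, tau=0.05):
    """Differentiable interior indicator for an irregular domain.

    `sdf` returns signed distance to the boundary (negative inside).
    A sigmoid of -sdf/tau smoothly approximates the hard 0/1 mask;
    tau controls the sharpness of the boundary transition.
    """
    d = sdf(points)
    return 1.0 / (1.0 + np.exp(d / tau))

# Example irregular-for-the-grid domain: the unit disk, sdf(x) = |x| - 1.
disk_sdf = lambda p: np.linalg.norm(p, axis=-1) - 1.0

# A small rectilinear computational grid covering the domain.
grid = np.stack(np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5)), axis=-1)
mask = soft_inside(grid.reshape(-1, 2), disk_sdf)
```

For point-cloud or NURBS inputs, `sdf` would itself be a learned or computed module; the sigmoid relaxation is what keeps the pipeline end-to-end differentiable.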
Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns on unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels. In a carefully controlled comparison study, we show that caption-supervised CNNs trained on a standard cross-entropy loss (with image labels assigned by scanning captions for class names) can exhibit greater distributional robustness than VL models trained on the same data. To facilitate future experiments with high-accuracy caption-supervised models, we introduce CaptionNet (https://github.com/penfever/CaptionNet/), which includes a class-balanced, fully supervised dataset with over 50,000 new human-labeled ImageNet-compliant samples paired with web-scraped captions. In a series of experiments on CaptionNet, we show how the choice of loss function, data filtration and supervision strategy enables robust computer vision. We also provide the codebase necessary to reproduce our experiments at VL Hub (https://github.com/penfever/vlhub/).
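The label-assignment step described in the parenthetical — scanning captions for class names — can be sketched in a few lines. The class list and word-boundary matching here are illustrative, not CaptionNet's exact pipeline:

```python
import re

def labels_from_caption(caption, class_names):
    """Assign image labels by scanning the caption for class names,
    the simple caption-supervision recipe described above."""
    text = caption.lower()
    # Word-boundary match so "cat" does not fire on "category".
    return [c for c in class_names if re.search(rf"\b{re.escape(c)}\b", text)]

classes = ["dog", "cat", "bicycle"]
found = labels_from_caption("A dog chasing a cat downhill", classes)
```

A real pipeline would also normalize synonyms and filter noisy matches before training with cross-entropy on the resulting labels.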
Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed multi-agent systems. However, variations in the data/models associated with the agents can significantly affect decision-making and require consensus among the agents. Moreover, most existing works focus on developing methods for (strongly or non-strongly) convex losses, and few results are known on regret bounds for distributed online optimization with general non-convex losses. To address these two issues, we propose a novel composite regret, a new network-aware metric for evaluating distributed online optimization algorithms. We concretely define both static and dynamic forms of the composite regret. Leveraging its dynamic form, we develop a consensus-based online normalized gradient (CONGD) approach for pseudo-convex losses, which provably attains sublinear regret with a regularity term related to the path variation of the optimizers. For general non-convex losses, we first clarify the regret of distributed online non-convex learning in light of recent advances, under which no deterministic algorithm can achieve sublinear regret. We then develop a distributed online non-convex optimization method with an offline optimization oracle (DINOCO) that requires no access to gradients. DINOCO is shown to achieve sublinear regret. To our knowledge, this is the first sublinear-regret result for general distributed online non-convex learning.
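The consensus-plus-normalized-gradient pattern underlying CONGD can be sketched generically. This is not the paper's exact update rule, just the standard shape of such methods: each agent mixes its decision with its neighbors' via a doubly stochastic matrix, then takes a normalized gradient step:

```python
import numpy as np

def consensus_online_step(X, grads, W, eta):
    """One round of a consensus-based online normalized-gradient update.

    Each row of X is an agent's decision; W is a doubly stochastic
    mixing matrix over the communication network. A generic sketch of
    the consensus + normalized-gradient pattern, not the exact CONGD rule.
    """
    mixed = W @ X                                    # average with neighbors
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    norms = np.where(norms > 0, norms, 1.0)          # avoid divide-by-zero
    return mixed - eta * grads / norms               # normalized descent

# Two agents with toy quadratic losses f_i(x) = ||x - t_i||^2 / 2.
W = np.array([[0.5, 0.5], [0.5, 0.5]])               # fully mixing network
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.zeros((2, 2))
for _ in range(200):
    X = consensus_online_step(X, X - targets, W, eta=0.02)
```

In this stationary toy, the agents reach consensus near the average minimizer; the online setting replaces the fixed losses with a time-varying sequence.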
The Transformer architecture has driven remarkable progress in many state-of-the-art applications. Despite this success, however, modern Transformers rely on the self-attention mechanism, whose time and space complexity is quadratic in the length of the input. Several approaches have been proposed to speed up the self-attention mechanism and achieve sub-quadratic running time, but the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even when the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that dot-product self-attention can indeed be approximated in linear time using finite Taylor series, at a cost that depends on the polynomial order.
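The linear-time Taylor approximation can be illustrated at first order: replacing exp(q·k) with 1 + q·k lets the softmax numerator and denominator factor into sums computed once over all keys, so the cost is linear rather than quadratic in sequence length. A minimal sketch (first-order only; higher orders, as in the paper, refine the approximation at a cost growing with the polynomial order, and the weights stay positive only while 1 + q·k > 0):

```python
import numpy as np

def taylor_attention(Q, K, V):
    """Linear-time first-order Taylor approximation of softmax attention.

    exp(q.k) ~ 1 + q.k, so for each query:
      numerator   = sum_j V_j  +  q . (K^T V)
      denominator = n          +  q . sum_j K_j
    Both right-hand sums are computed once, in O(n) total.
    """
    n, _ = K.shape
    k_sum = K.sum(axis=0)             # (d,)   one pass over keys
    v_sum = V.sum(axis=0)             # (dv,)
    kv = K.T @ V                      # (d, dv)
    num = v_sum + Q @ kv              # (m, dv)
    den = n + Q @ k_sum               # (m,)
    return num / den[:, None]

rng = np.random.default_rng(1)
Q = 0.1 * rng.standard_normal((4, 8))     # small logits -> Taylor regime
K = 0.1 * rng.standard_normal((16, 8))
V = rng.standard_normal((16, 3))

approx = taylor_attention(Q, K, V)
# Exact quadratic-time softmax attention, for comparison.
S = np.exp(Q @ K.T)
exact = (S / S.sum(axis=1, keepdims=True)) @ V
```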
We introduce Net2Brain, a toolbox with graphical and command-line user interfaces for comparing the representational spaces of artificial deep neural networks (DNNs) and human brain recordings. While other toolboxes offer only a single functionality or focus on a small subset of supervised image-classification models, Net2Brain allows the extraction of activations from more than 600 trained DNNs performing a variety of vision-related tasks (e.g., semantic segmentation, depth estimation, action recognition, etc.) on both image and video datasets. The toolbox computes representational dissimilarity matrices (RDMs) over these activations and compares them to brain recordings using representational similarity analysis (RSA) and weighted RSA, both within specific ROIs and via searchlight analysis. In addition, new stimulus and brain-recording datasets can be added to the toolbox for evaluation. We demonstrate the functionality and advantages of Net2Brain with an example showing how it can be used to test hypotheses in cognitive computational neuroscience.
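The RDM + RSA comparison at the heart of the toolbox can be sketched compactly. This is a generic version with common choices (1 − Pearson for dissimilarity, Spearman over upper triangles for RSA), not Net2Brain's exact implementation:

```python
import numpy as np

def rdm(activations):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(activations)

def rsa(rdm_a, rdm_b):
    """RSA score: Spearman correlation of the two RDMs' upper triangles.
    Ranks via double argsort -- fine for continuous, tie-free values."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks_a = np.argsort(np.argsort(rdm_a[iu])).astype(float)
    ranks_b = np.argsort(np.argsort(rdm_b[iu])).astype(float)
    return float(np.corrcoef(ranks_a, ranks_b)[0, 1])

rng = np.random.default_rng(2)
dnn_acts = rng.standard_normal((10, 50))                     # 10 stimuli x 50 units
brain_acts = dnn_acts + 0.5 * rng.standard_normal((10, 50))  # noisy "recordings"
score = rsa(rdm(dnn_acts), rdm(brain_acts))
```

Weighted RSA and searchlight analysis, as offered by the toolbox, build on this same RDM comparison.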
5G edge computing-enabled Internet of Medical Things (IoMT) is an efficient technology for providing decentralized medical services, and device-to-device (D2D) communication is a promising paradigm for future 5G networks. To ensure secure and reliable communication in 5G edge computing and D2D-enabled IoMT systems, this paper presents an intelligent trust cloud management method. First, an active training mechanism is proposed to construct the standard trust clouds. Second, the individual trust clouds of IoMT devices can be established through trust inference and recommendation. Third, a trust classification scheme is proposed to determine whether an IoMT device is malicious. Finally, a trust cloud update mechanism is presented to make the proposed trust management method adaptive and intelligent under open wireless media. Simulation results show that the proposed method can effectively address the trust uncertainty problem and improve the detection accuracy of malicious devices.
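The trust-cloud digest and classification step can be sketched with the standard backward cloud generator, which summarizes a set of trust ratings as (Ex, En, He): expectation, entropy, hyper-entropy. The deviation margins and the whole classification rule below are illustrative; the paper's scheme is richer (inference, recommendation, and adaptive updates):

```python
import numpy as np

def trust_cloud(ratings):
    """Backward cloud generator: summarize trust ratings as (Ex, En, He)."""
    ex = float(np.mean(ratings))
    en = float(np.sqrt(np.pi / 2.0) * np.mean(np.abs(ratings - ex)))
    he = float(np.sqrt(max(np.var(ratings) - en ** 2, 0.0)))
    return ex, en, he

def is_malicious(device_ratings, standard_ratings, ex_margin=0.2, en_margin=0.2):
    """Flag a device whose trust cloud deviates from the standard cloud:
    markedly lower expected trust, or markedly higher rating volatility."""
    ex_d, en_d, _ = trust_cloud(device_ratings)
    ex_s, en_s, _ = trust_cloud(standard_ratings)
    return ex_d < ex_s - ex_margin or en_d > en_s + en_margin

rng = np.random.default_rng(3)
standard = 0.9 + 0.05 * rng.standard_normal(50)   # ratings behind the standard cloud
honest = 0.88 + 0.05 * rng.standard_normal(50)
rogue = 0.3 + 0.05 * rng.standard_normal(50)      # consistently low trust
```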
Knowledge distillation is the process of transferring "knowledge" from a large model (the teacher) to a more compact one (the student), often used in the context of model compression. When both models share the same architecture, the process is called self-distillation. Several anecdotal reports suggest that a self-distilled student can outperform the teacher on held-out data. In this work, we systematically study this phenomenon in a number of settings. We first show that even with a highly accurate teacher, self-distillation allows the student to surpass the teacher in all cases. Secondly, we revisit existing theoretical explanations of (self-)distillation and identify contradicting examples, revealing possible drawbacks of these explanations. Finally, we provide an alternative explanation for the dynamics of self-distillation through the lens of loss-landscape geometry. We conduct extensive experiments showing that self-distillation leads to flatter minima, thereby resulting in better generalization.
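The standard distillation objective referred to above mixes hard-label cross-entropy with cross-entropy to the teacher's temperature-softened outputs (equal to a KL term up to a constant); in self-distillation, the teacher is simply an earlier trained copy of the same architecture. A minimal sketch with illustrative T and alpha, not the paper's training settings:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """alpha * (soft-target term, scaled by T^2 as is conventional)
    + (1 - alpha) * hard-label cross-entropy."""
    soft_targets = softmax(teacher_logits, T)
    kd = -(soft_targets * np.log(softmax(student_logits, T))).sum(axis=-1).mean() * T * T
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    return alpha * kd + (1 - alpha) * ce

teacher = np.array([[5.0, 1.0, 0.0], [0.0, 4.0, 1.0]])
student = teacher + 0.5          # same distribution: softmax is shift-invariant
labels = np.array([0, 1])
loss_good = self_distillation_loss(student, teacher, labels)
loss_bad = self_distillation_loss(teacher[:, ::-1].copy(), teacher, labels)
```

A student matching the teacher's distribution incurs a much smaller loss than one contradicting it, which is what drives the student toward (and, per the abstract, past) the teacher.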
State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts. On the other hand, several recent classifiers with favorable out-of-distribution (OOD) robustness properties have emerged, attaining high accuracy on their target tasks while maintaining their out-of-distribution accuracy on challenging benchmarks. We present a meta-analysis of a wide range of publicly released models, most of which have been published within the last twelve months. Through this meta-analysis, we empirically identify four main commonalities shared by all the best-performing OOD models, all of which illuminate the considerable promise of vision-language pretraining.